    Negative Statements Considered Useful

    Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering, and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, while abstaining from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements that are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards compiling negative statements. (i) In peer-based statistical inference, we compare an entity with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In query-log-based text extraction, we use a pattern-based approach for harvesting search-engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.1M statements for 100K popular Wikidata entities.
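
    The peer-based approach lends itself to a compact illustration. The sketch below is a minimal, hypothetical reconstruction of the core idea, not the paper's actual pipeline: statements that hold for most of an entity's peers but are absent for the entity itself become ranked negative-statement candidates. The `kb` layout, the `peers` list, and the frequency-based ranking are all illustrative assumptions.

```python
from collections import Counter

def candidate_negations(target, peers, kb, min_support=0.5):
    """Peer-based statistical inference, sketched: statements common among
    a target's peers but absent for the target are returned as negative
    candidates, ranked by the fraction of peers asserting them (a simple
    unsupervised stand-in for the paper's supervised ranking features)."""
    counts = Counter(stmt for peer in peers for stmt in kb[peer])
    return sorted(
        ((stmt, n / len(peers)) for stmt, n in counts.items()
         if stmt not in kb[target] and n / len(peers) >= min_support),
        key=lambda pair: -pair[1],
    )

# Toy KB: both peers hold an award the target entity lacks.
kb = {
    "target_physicist": {("field", "physics")},
    "peer_a": {("field", "physics"), ("award", "Nobel Prize in Physics")},
    "peer_b": {("field", "physics"), ("award", "Nobel Prize in Physics")},
}
print(candidate_negations("target_physicist", ["peer_a", "peer_b"], kb))
# -> [(('award', 'Nobel Prize in Physics'), 1.0)]
```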

    UnCommonSense: Informative Negative Knowledge about Everyday Concepts

    Commonsense knowledge about everyday concepts is an important asset for AI applications, such as question answering and chatbots. Recently, we have seen an increasing interest in the construction of structured commonsense knowledge bases (CSKBs). An important part of human commonsense is about properties that do not apply to concepts, yet existing CSKBs only store positive statements. Moreover, since CSKBs operate under the open-world assumption, absent statements are considered to have unknown truth rather than being invalid. This paper presents the UNCOMMONSENSE framework for materializing informative negative commonsense statements. Given a target concept, comparable concepts are identified in the CSKB, for which a local closed-world assumption is postulated. This way, positive statements about comparable concepts that are absent for the target concept become seeds for negative statement candidates. The large set of candidates is then scrutinized, pruned and ranked by informativeness. Intrinsic and extrinsic evaluations show that our method significantly outperforms the state-of-the-art. A large dataset of informative negations is released as a resource for future research.
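
    Since UNCOMMONSENSE also pivots on assertions absent for a target but present for comparable concepts, a hedged sketch of the local closed-world step may help. The `cskb` dictionary, the similarity function, and the word-overlap pruning below are illustrative stand-ins, and the count-based score only approximates the paper's informativeness ranking.

```python
def uncommonsense_candidates(target, cskb, sim, k=2):
    """Local closed-world sketch: treat the k most comparable concepts'
    positive assertions as locally complete; those absent for the target
    become negative candidates, pruned by overlap with the target's own
    assertions and scored by how many comparables assert them."""
    comparables = sorted((c for c in cskb if c != target),
                         key=lambda c: -sim(target, c))[:k]
    target_words = {w for a in cskb[target] for w in a.split()}
    scores = {}
    for concept in comparables:
        for assertion in cskb[concept] - cskb[target]:
            # Crude prune: skip candidates that mostly restate a positive.
            if len(set(assertion.split()) & target_words) < 2:
                scores[assertion] = scores.get(assertion, 0) + 1
    return sorted(scores.items(), key=lambda pair: -pair[1])

cskb = {
    "penguin": {"is a bird", "lives in cold climates"},
    "eagle": {"is a bird", "can fly", "hunts prey"},
    "sparrow": {"is a bird", "can fly"},
}
jaccard = lambda a, b: len(cskb[a] & cskb[b]) / len(cskb[a] | cskb[b])
print(uncommonsense_candidates("penguin", cskb, jaccard))
# "can fly" surfaces as an informative negation for penguin.
```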

    The next frontier: Fostering innovation by improving health data access and utilization

    Beneath most lively policy debates sit dry-as-dust theoretical and methodological discussions. Current disputes over the EU Adaptive Pathways initiative and the proposed US 21st Century Cures Act may ultimately rest on addressing arcane issues of data curation, standardization, and utilization. Improved extraction of information on the safety and effectiveness of drugs-in-use must parallel adjustments in evidence requirements at the time of licensing. To do otherwise may compromise safety and efficacy in the name of fostering innovation.

    Deep neural networks allow expert-level brain meningioma segmentation and present potential for improvement of clinical practice

    Accurate brain meningioma segmentation and volumetric assessment are critical for serial patient follow-up, surgical planning, and monitoring response to treatment. The current gold standard, manual labeling, is a time-consuming process subject to inter-user variability. Fully automated algorithms for meningioma segmentation have the potential to bring volumetric analysis into clinical and research workflows by increasing accuracy and efficiency, reducing inter-user variability, and saving time. Previous research has focused solely on segmentation tasks without assessing the impact and usability of deep learning solutions in clinical practice. Herein, we demonstrate a three-dimensional convolutional neural network (3D-CNN) that performs expert-level, automated meningioma segmentation and volume estimation on MRI scans. A 3D-CNN was initially trained to segment entire brain volumes using a dataset of 10,099 healthy brain MRIs. Using transfer learning, the network was then specifically trained on meningioma segmentation using 806 expert-labeled MRIs. The final model achieved a median performance of 88.2%, reaching the spectrum of current inter-expert variability (82.6-91.6%). We demonstrate in a simulated clinical scenario that a deep learning approach to meningioma segmentation is feasible, highly accurate, and has the potential to improve current clinical practice.
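
    The whole-brain-pretraining-then-fine-tuning recipe described here follows the standard transfer-learning pattern; as a rough illustration only, the PyTorch sketch below reuses a pretrained encoder and fine-tunes a fresh segmentation head. The tiny network, the "pretrained_brain_encoder.pt" checkpoint, and the learning rates are all hypothetical, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    """Toy stand-in for a volumetric segmentation CNN: a small 3D encoder
    followed by a 1x1x1 head emitting a per-voxel foreground logit."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv3d(16, 1, kernel_size=1)

    def forward(self, x):
        return self.head(self.encoder(x))

model = Tiny3DSegNet()
# Hypothetical checkpoint from whole-brain pretraining; uncomment if one exists.
# model.encoder.load_state_dict(torch.load("pretrained_brain_encoder.pt"))
optimizer = torch.optim.Adam([
    {"params": model.encoder.parameters(), "lr": 1e-5},  # gentle updates
    {"params": model.head.parameters(), "lr": 1e-3},     # fresh task head
])
loss_fn = nn.BCEWithLogitsLoss()

# One fine-tuning step on a random stand-in MRI patch and label mask.
volume = torch.randn(1, 1, 32, 32, 32)
mask = torch.randint(0, 2, (1, 1, 32, 32, 32)).float()
optimizer.zero_grad()
loss = loss_fn(model(volume), mask)
loss.backward()
optimizer.step()
```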

    Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery

    Despite growing interest in using large language models (LLMs) in healthcare, current explorations do not assess the real-world utility and safety of LLMs in clinical settings. Our objective was to determine whether two LLMs can serve information needs submitted by physicians as questions to an informatics consultation service in a safe and concordant manner. Sixty-six questions from an informatics consult service were submitted to GPT-3.5 and GPT-4 via simple prompts. Twelve physicians assessed the LLM responses' potential for patient harm and their concordance with existing reports from the informatics consultation service. Physician assessments were summarized by majority vote. For no question did a majority of physicians deem either LLM response harmful. For GPT-3.5, responses to 8 questions were concordant with the informatics consult report, 20 were discordant, and 9 could not be assessed; for 29 responses there was no majority on "Agree", "Disagree", or "Unable to assess". For GPT-4, responses to 13 questions were concordant, 15 were discordant, and 3 could not be assessed; for 35 responses there was no majority. Responses from both LLMs were largely devoid of overt harm, but less than 20% of the responses agreed with an answer from the informatics consultation service, responses contained hallucinated references, and physicians were divided on what constitutes harm. These results suggest that while general-purpose LLMs are able to provide safe and credible responses, they often do not meet the specific information need of a given question. A definitive evaluation of the usefulness of LLMs in healthcare settings will likely require additional research on prompt engineering, calibration, and custom tailoring of general-purpose models.
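
    The majority-vote summarization is simple enough to reconstruct. The sketch below assumes 12 assessors per response and a quorum of 7 (a strict majority), with label names taken from the abstract; the function itself is an illustrative guess at the aggregation, not the authors' code.

```python
from collections import Counter

def summarize_votes(votes, quorum=7):
    """Return the label backed by at least `quorum` votes
    (a strict majority of 12); otherwise record no majority."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= quorum else "No majority"

# One LLM response rated by 12 physicians:
ratings = ["Agree"] * 5 + ["Disagree"] * 4 + ["Unable to assess"] * 3
print(summarize_votes(ratings))  # -> "No majority"
```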